Lifelong Variational Autoencoder via Online Adversarial Expansion Strategy

Authors

Fei Ye, Adrian G. Bors

Abstract

The Variational Autoencoder (VAE) suffers from a significant loss of information when trained on a non-stationary data distribution. This loss in VAE models, called catastrophic forgetting, has not been studied theoretically before. We analyse the forgetting behaviour of continual generative modelling by developing a new lower bound on the data likelihood, which interprets the forgetting process as an increase in the probability distance between the generator's distribution and the evolved data distribution. The proposed bound shows that a VAE-based dynamic expansion model can achieve better performance if its capacity increases appropriately considering the distribution shift. Based on this analysis, we propose a novel expansion criterion that aims to preserve the knowledge diversity among the components, while ensuring that the model acquires more knowledge with fewer parameters. Specifically, we implement this criterion from the perspective of a multi-player game, the Online Adversarial Expansion Strategy (OAES), which considers all previously learned components as well as the currently updated component as multiple players in a game, while an adversary evaluates their performance. OAES dynamically estimates the discrepancy between each player and the adversary without accessing task information, which leads to the gradual addition of new components while preserving the knowledge diversity among them. We show empirically that the proposed expansion strategy enables a VAE model to achieve the best performance given an appropriate model size.
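To make the expansion idea above concrete, here is a minimal sketch assuming a PyTorch-style setup; it is an illustration, not the authors' OAES implementation, and the names (ToyComponent, Adversary, discrepancy, maybe_expand) and the fixed threshold are hypothetical choices of mine.

```python
# Hedged sketch: an adversary scores how far each existing component's samples
# are from the incoming data; a new component is added only when all existing
# players are too far away. All names and the threshold rule are illustrative.
import torch
import torch.nn as nn

class ToyComponent(nn.Module):
    """One mixture component, reduced to its generator side; the encoder and the
    ELBO training loop are omitted because only generation is scored here."""
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.z_dim = z_dim
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, x_dim), nn.Sigmoid())

    def sample(self, n):
        return self.dec(torch.randn(n, self.z_dim))

class Adversary(nn.Module):
    """Critic shared by all players; its score gap acts as a probability-distance proxy."""
    def __init__(self, x_dim=784):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, x):
        return self.net(x)

def discrepancy(adversary, component, real_batch):
    """Gap between critic scores on the incoming data and on a component's samples."""
    with torch.no_grad():
        fake = component.sample(real_batch.size(0))
        return (adversary(real_batch).mean() - adversary(fake).mean()).abs().item()

def maybe_expand(components, adversary, real_batch, threshold=0.5):
    """Add a fresh component only when every existing player is far from the
    current data, i.e. none of them can absorb the new knowledge cheaply."""
    gaps = [discrepancy(adversary, c, real_batch) for c in components]
    if min(gaps) > threshold:
        components.append(ToyComponent())
    return gaps

# Usage on a dummy batch: start with one component, let the criterion decide online.
components, adversary = [ToyComponent()], Adversary()
print(maybe_expand(components, adversary, torch.rand(32, 784)), len(components))
```

The fixed threshold stands in for the paper's selection criterion; in a real continual-learning loop the adversary and the selected component would keep training on the data stream between expansion checks.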


Similar papers

Adversarial Symmetric Variational Autoencoder

A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for marginal log-likelihood fits observed data and late...
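For orientation, the two symmetric joint forms referred to in this snippet can be written out; the notation below is standard VAE convention and an assumption on my part rather than a quote from the paper:

q_\phi(x, z) = q(x)\, q_\phi(z \mid x), \qquad p_\theta(x, z) = p(z)\, p_\theta(x \mid z),

where the first joint routes observed data x through the encoder to produce codes, and the second routes codes z drawn from a simple prior through the decoder to produce data, with lower bounds fit to both marginals.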


Symmetric Variational Autoencoder and Connections to Adversarial Learning

A new form of the variational autoencoder (VAE) is proposed, based on the symmetric Kullback-Leibler divergence. It is demonstrated that learning of the resulting symmetric VAE (sVAE) has close connections to previously developed adversarial-learning methods. This relationship helps unify the previously distinct techniques of VAE and adversarial learning, and provides insights that allow us to...
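To spell out the divergence named above (a standard definition, not a quote from the paper), the symmetric Kullback-Leibler divergence between two distributions p and q is

D_{\mathrm{sym}}(p, q) = \mathrm{KL}(p \,\|\, q) + \mathrm{KL}(q \,\|\, p),

presumably applied in the sVAE between the encoder-side joint q_\phi(x, z) and the decoder-side joint p_\theta(x, z); the intractable density ratios inside the two KL terms are what invite discriminator-based, i.e. adversarial, estimation.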


Generative Adversarial Autoencoder Networks

We introduce an effective model to overcome the problem of mode collapse when training Generative Adversarial Networks (GAN). First, we propose a new generator objective that better tackles mode collapse. Second, we apply an independent Autoencoder (AE) to constrain the generator and consider its reconstructed samples as “real” samples to slow down the convergence of the discriminator th...
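The "reconstructions count as real" trick in the snippet is easy to show in a few lines; this is a hedged sketch with assumed networks (G, D, AE) and loss wiring, not the authors' code:

```python
# Hedged sketch: the discriminator sees autoencoder reconstructions labelled as
# "real", which softens its training signal and slows its convergence.
import torch
import torch.nn as nn

x_dim, z_dim = 784, 32
G = nn.Sequential(nn.Linear(z_dim, 256), nn.ReLU(), nn.Linear(256, x_dim), nn.Sigmoid())
D = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU(), nn.Linear(256, 1))
AE = nn.Sequential(nn.Linear(x_dim, 64), nn.ReLU(), nn.Linear(64, x_dim), nn.Sigmoid())
bce = nn.BCEWithLogitsLoss()

def discriminator_loss(x_real):
    """Real data and AE reconstructions both get the 'real' label; generator
    samples get the 'fake' label."""
    x_rec = AE(x_real).detach()
    x_fake = G(torch.randn(x_real.size(0), z_dim)).detach()
    ones = torch.ones(x_real.size(0), 1)
    zeros = torch.zeros(x_real.size(0), 1)
    return bce(D(x_real), ones) + bce(D(x_rec), ones) + bce(D(x_fake), zeros)

print(discriminator_loss(torch.rand(8, x_dim)).item())
```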


Variational Lossy Autoencoder

Representation learning seeks to expose certain aspects of observed data in a learned representation that’s amenable to downstream tasks like classification. For instance, a good representation for 2D images might be one that describes only global structure and discards information about detailed texture. In this paper, we present a simple but principled method to learn such global representati...


Quantum Variational Autoencoder

Variational autoencoders (VAEs) are powerful generative models with the salient ability to perform inference. Here, we introduce a quantum variational autoencoder (QVAE): a VAE whose latent generative process is implemented as a quantum Boltzmann machine (QBM). We show that our model can be trained end-to-end by maximizing a well-defined loss function: a “quantum” lower bound to a variational ap...
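For context, the classical evidence lower bound that such a model builds on is (standard VAE notation, assumed rather than quoted from the paper):

\log p_\theta(x) \ge \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - \mathrm{KL}\left(q_\phi(z \mid x) \,\|\, p_\theta(z)\right),

where in the QVAE the prior p_\theta(z) is realised by a quantum Boltzmann machine; its log-partition function is itself intractable, which is presumably why a further, “quantum” bound on the objective is needed.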



Journal

Journal title: Proceedings of the ... AAAI Conference on Artificial Intelligence

Year: 2023

ISSN: 2159-5399, 2374-3468

DOI: https://doi.org/10.1609/aaai.v37i9.26293